Weighting Function


DAW: Exploring the Better Weighting Function for Semi-supervised Semantic Segmentation (Supplementary Material)

Sun, Rui, Mai, Huayu

Neural Information Processing Systems

In the supplementary material, we first introduce the pseudo-algorithm of DAW. We then provide a more detailed explanation of Figures 1, 2, 4, and 5, which are slightly abbreviated due to the limited space of the main paper. In the naive pseudo-labeling method, all pseudo-labels are enrolled into training, i.e., E1 + E2, which is guaranteed by the theoretical functional analysis in the next section; Inequality 45 holds true at all times. Finally, we provide more qualitative comparisons between our method and other competitors.




Attribution Methods for Concepts

Neural Information Processing Systems

In our case, it boils down to: the smoothing effect induced by the average helps to reduce the visual noise, and hence improves the explanations. For the experiment, m and σ are the same as in SmoothGrad. We start by deriving the closed form of Saliency (SA) and, naturally, Gradient-Input (GI). The case of VarGrad is specific: since the gradient of a linear system is constant, its variance is null. We recall that for Gradient-Input, Integrated Gradients, and Occlusion, … It was quickly realized that they unified properties of various domains such as graph theory, linear algebra, and geometry. Later, in the '60s, a connection was made. At each step, the insertion metric selects the concepts of maximum score given a cardinality constraint.
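The VarGrad observation above can be checked directly: for a linear model f(x) = w·x, the gradient is the constant w, so averaging gradients over noisy inputs (SmoothGrad) recovers w exactly and their variance (VarGrad) is identically zero. A minimal sketch, with illustrative values for m and the noise scale σ:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 0.5])   # hypothetical linear model f(x) = w @ x

def grad_linear(x):
    # gradient of f(x) = w @ x with respect to x is the constant w
    return w

x = np.array([0.3, 0.7, -0.1])
m, sigma = 50, 0.1                # SmoothGrad sample count and noise scale
grads = np.stack([grad_linear(x + rng.normal(scale=sigma, size=x.shape))
                  for _ in range(m)])

smoothgrad = grads.mean(axis=0)   # equals w for a linear model
vargrad = grads.var(axis=0)       # exactly zero for a linear model
print(np.allclose(smoothgrad, w), np.allclose(vargrad, 0.0))  # True True
```

For a nonlinear model the gradient varies with the noisy input, so VarGrad becomes informative; linearity is precisely the degenerate case noted above.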



Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting

Neural Information Processing Systems

Current deep neural networks (DNNs) can easily overfit to biased training data with corrupted labels or class imbalance. A sample re-weighting strategy is commonly used to alleviate this issue by designing a weighting function mapping from training loss to sample weight, and then iterating between weight recalculation and classifier updating. Current approaches, however, need to manually pre-specify the weighting function as well as its additional hyper-parameters, which makes them fairly hard to apply in practice, since the proper weighting scheme varies significantly with the investigated problem and training data. To address this issue, we propose a method capable of adaptively learning an explicit weighting function directly from data. The weighting function is an MLP with one hidden layer, constituting a universal approximator to almost any continuous function, which makes the method able to fit a wide range of weighting-function forms, including those assumed in conventional research. Guided by a small amount of unbiased meta-data, the parameters of the weighting function can be finely updated simultaneously with the learning process of the classifiers. Synthetic and real experiments substantiate the capability of our method to achieve proper weighting functions in class-imbalance and noisy-label cases, fully complying with the common settings in traditional methods, as well as in more complicated scenarios beyond conventional cases. This naturally leads to better accuracy than other state-of-the-art methods.
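The core object in the abstract above is a one-hidden-layer MLP that maps a per-sample training loss to a weight. A minimal NumPy sketch of such a weighting net follows; the hidden size, initialization, and sigmoid output range are illustrative assumptions, not the paper's exact configuration, and the meta-data-guided update is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

class WeightNet:
    """One-hidden-layer MLP: scalar training loss -> sample weight in (0, 1)."""

    def __init__(self, hidden=16):
        # small random init; hidden width is an illustrative choice
        self.W1 = rng.normal(scale=0.5, size=(1, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.5, size=(hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, losses):
        # losses: shape (n,) -> weights: shape (n,)
        h = np.maximum(losses[:, None] @ self.W1 + self.b1, 0.0)  # ReLU hidden layer
        z = h @ self.W2 + self.b2
        return 1.0 / (1.0 + np.exp(-z[:, 0]))                    # sigmoid output

net = WeightNet()
losses = np.array([0.1, 1.0, 5.0])   # hypothetical per-sample training losses
weights = net(losses)
print(weights.shape, bool(np.all((weights > 0) & (weights < 1))))
```

In the full method these MLP parameters would be updated in an inner/outer loop against a small unbiased meta set, while the classifier is trained with the resulting sample weights.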


Double Machine Learning of Continuous Treatment Effects with General Instrumental Variables

Chen, Shuyuan, Zhang, Peng, Cui, Yifan

arXiv.org Machine Learning

Estimating causal effects of continuous treatments is a common problem in practice, for example, in studying dose-response functions. Classical analyses typically assume that all confounders are fully observed, whereas in real-world applications, unmeasured confounding often persists. In this article, we propose a novel framework for local identification of dose-response functions using instrumental variables, thereby mitigating bias induced by unobserved confounders. We introduce the concept of a uniform regular weighting function and consider covering the treatment space with a finite collection of open sets. On each of these sets, such a weighting function exists, allowing us to identify the dose-response function locally within the corresponding region. For estimation, we develop an augmented inverse probability weighting score for continuous treatments under a debiased machine learning framework with instrumental variables. We further establish the asymptotic properties when the dose-response function is estimated via kernel regression or empirical risk minimization. Finally, we conduct both simulation and empirical studies to assess the finite-sample performance of the proposed methods.
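To make the weighting idea behind the estimator concrete, here is a hedged sketch of kernel-localized inverse probability weighting for a continuous treatment: outcomes near the target dose a are reweighted by a kernel over (A - a) divided by the conditional treatment density (assumed known here). The augmentation (outcome-regression) term and the instrumental-variable machinery of the actual method are omitted, and all names are illustrative:

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def ipw_dose_response(a, A, Y, density, h=0.2):
    # Kernel-localized IPW (Hajek-normalized) estimate of E[Y(a)]
    w = gaussian_kernel((A - a) / h) / (h * density(A))
    return np.sum(w * Y) / np.sum(w)

rng = np.random.default_rng(1)
A = rng.uniform(0, 1, 5000)                        # treatment ~ Uniform(0, 1), density = 1
Y = 2 * A + rng.normal(scale=0.1, size=A.shape)    # true dose-response: E[Y(a)] = 2a
est = ipw_dose_response(0.5, A, Y, density=lambda x: np.ones_like(x))
print(est)  # close to 2 * 0.5 = 1.0
```

In the paper's setting the density is not known and confounders are unobserved, which is exactly why the debiased (double machine learning) augmentation and the instrumental-variable weighting functions are needed.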


DAW: Exploring the Better Weighting Function for Semi-supervised Semantic Segmentation

Neural Information Processing Systems

The critical challenge of semi-supervised semantic segmentation lies in how to fully exploit a large volume of unlabeled data to improve the model's generalization performance for robust segmentation. Existing methods tend to employ certain criteria (a weighting function) to select pixel-level pseudo-labels. However, when the weighting function is chosen without careful consideration, these methods face a trade-off between inaccurate-yet-utilized pseudo-labels and correct-yet-discarded pseudo-labels, hindering the generalization ability of the model. In this paper, we systematically analyze this trade-off in previous methods. We formally define the trade-off between inaccurate-yet-utilized and correct-yet-discarded pseudo-labels by explicitly modeling the confidence distributions of correct and inaccurate pseudo-labels, equipped with a unified weighting function. To this end, we propose Distribution-Aware Weighting (DAW), which strives to minimize the negative equivalence impact raised by the trade-off. Interestingly, we find that the optimal weighting function is a hard step function, with the jump point located at the intersection of the two confidence distributions. Besides, we devise distribution alignment to mitigate the discrepancy between the prediction distributions of labeled and unlabeled data. Extensive experimental results on multiple benchmarks, including mitochondria segmentation, demonstrate that DAW performs favorably against state-of-the-art methods.
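The step-function finding above can be sketched in a few lines: model the confidences of incorrect and correct pseudo-labels as two Gaussians and place the jump of a hard step weighting at their intersection. With equal variances (an illustrative simplification; the fitted parameters below are hypothetical), the intersection is just the midpoint of the two means:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def step_weight(conf, threshold):
    # Hard step weighting: keep only pseudo-labels with confidence above the jump point.
    return (conf >= threshold).astype(float)

mu_wrong, mu_right, sigma = 0.55, 0.85, 0.08   # hypothetical fitted confidence distributions
threshold = (mu_wrong + mu_right) / 2          # intersection for equal-variance Gaussians

# densities agree at the jump point by symmetry
assert np.isclose(gaussian_pdf(threshold, mu_wrong, sigma),
                  gaussian_pdf(threshold, mu_right, sigma))

conf = np.array([0.50, 0.69, 0.71, 0.95])      # per-pixel pseudo-label confidences
print(step_weight(conf, threshold))            # -> [0. 0. 1. 1.]
```

With unequal variances the intersection would instead be a root of a quadratic in the confidence, but the weighting remains a hard step at that root.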


Demystifying Diffusion Objectives: Reweighted Losses are Better Variational Bounds

Shi, Jiaxin, Titsias, Michalis K.

arXiv.org Machine Learning

We derive a new theoretical interpretation of the reweighted losses that are widely used for training diffusion models. Our method is based on constructing a cascade of time-dependent variational lower bounds on the data log-likelihood, which provably improve upon the standard evidence lower bound and result in reduced data-model KL divergences. Combining such bounds gives rise to reweighted objectives that can be applied to any generative diffusion model, including both continuous Gaussian diffusion and masked (discrete) diffusion models. We then showcase this framework on masked diffusion and report significant improvements over previous training losses in pixel-space image modeling, approaching sample quality comparable to continuous diffusion models. Our results also provide a theoretical justification for the simple weighting scheme widely used in masked image models.